Given a natural language description of a user's demands, the NL2Code task aims to generate code that satisfies those demands. This is a critical yet challenging task that mirrors the capabilities of AI-assisted programming. NL2Code is inherently versatile, diverse, and complex: for example, a demand can be described in different languages, in different formats, and at different levels of granularity. This motivated us to conduct this survey of NL2Code. In this survey, we focus on how neural networks (NNs) solve NL2Code. We first propose a comprehensive framework that covers all studies in this field, and then parse the existing studies into this framework in depth. We maintain an online website that records the parsing results and tracks existing and recent NL2Code progress. In addition, we summarize the current challenges of NL2Code as well as its future directions. We hope this survey can foster the evolution of the field.
Given a programming problem, pre-trained language models such as Codex have demonstrated the ability to generate multiple diverse code solutions via sampling. However, selecting a correct or best solution from those samples remains a challenge. While a straightforward way to verify the correctness of a code solution is to execute test cases, producing high-quality test cases is prohibitively expensive. In this paper, we explore the use of pre-trained language models to automatically generate test cases, calling our method CodeT: Code generation with generated Tests. CodeT executes the code solutions using the generated test cases, and then chooses the best solution based on a dual execution agreement with both the generated test cases and the other generated solutions. We evaluate CodeT on five different pre-trained models with the HumanEval and MBPP benchmarks. Extensive experimental results demonstrate that CodeT achieves significant, consistent, and surprising improvements over previous methods. For example, CodeT improves the pass@1 on HumanEval to 65.8%, an absolute 18.8% improvement over the code-davinci-002 model, and an absolute 20+% improvement over the previous state-of-the-art results.
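A minimal sketch of the dual execution agreement idea, assuming solutions and assert-style tests are plain Python source strings (the exec-based runner here is for illustration only; real use would require sandboxing):

```python
from collections import defaultdict

def run_tests(solution_src: str, test_srcs: list[str]) -> frozenset:
    """Execute each generated test against a candidate solution and
    return the indices of the tests it passes."""
    passed = set()
    for i, test in enumerate(test_srcs):
        env = {}
        try:
            exec(solution_src, env)  # define the candidate function
            exec(test, env)          # run one assert-style generated test
            passed.add(i)
        except Exception:
            pass                     # failed test or crashing solution
    return frozenset(passed)

def select_best(solutions: list[str], tests: list[str]) -> str:
    # Group solutions that agree functionally (pass the same test set),
    # then score each group by (#agreeing solutions) * (#tests passed).
    groups = defaultdict(list)
    for sol in solutions:
        groups[run_tests(sol, tests)].append(sol)
    passed, sols = max(groups.items(), key=lambda kv: len(kv[1]) * len(kv[0]))
    return sols[0]  # any representative of the best-scoring group
```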
Code generation is a longstanding challenge that aims to generate a code snippet based on a natural-language description. Usually, expensive text-code paired data is essential for training a code generation model. Recently, thanks to the success of pre-training techniques, large language models have been trained on large-scale unlabeled code corpora and perform well in code generation. In this paper, we investigate how to leverage an unlabeled code corpus to train a model for library-oriented code generation. Since it is common practice for programmers to reuse third-party libraries, text-code paired data is hard to obtain owing to the huge number of libraries. We observe that library-oriented code snippets are more likely to share similar code sketches. Hence, we present CERT with two steps: a sketcher generates the sketch, and then a generator fills in the details of the sketch. Both the sketcher and the generator are continually pre-trained upon a base model using unlabeled data. Furthermore, we craft two benchmarks named PandasEval and NumpyEval to evaluate library-oriented code generation. Experimental results demonstrate the impressive performance of CERT. For example, it surpasses the base model by an absolute 15.67% improvement in terms of pass@1 on PandasEval. Our work is available at https://github.com/microsoft/pycodegpt.
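A minimal sketch of the two-step pipeline under assumed interfaces: `sketcher` and `generator` stand in for two continually pre-trained copies of the base model, their `.generate()` method is a hypothetical API, and the literal-masking rule is an assumed way of forming sketches:

```python
import re

def to_sketch(code: str) -> str:
    """Anonymize user-defined details (an assumed preprocessing step):
    mask string and numeric literals so library-oriented snippets
    collapse onto shared sketches."""
    code = re.sub(r'"[^"]*"', '"<STR>"', code)
    return re.sub(r"\b\d+(?:\.\d+)?\b", "<NUM>", code)

def cert_generate(nl_description: str, sketcher, generator) -> str:
    sketch = sketcher.generate(nl_description)                 # step 1: sketch
    return generator.generate(nl_description + "\n" + sketch)  # step 2: details
```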
In this paper, we study the use of a deep Transformer translation model for the CCMT 2022 Chinese-Thai low-resource machine translation task. We first explore the experiment settings (including the number of BPE merge operations, dropout probability, embedding size, etc.) for the low-resource scenario with a 6-layer Transformer. Considering that increasing the number of layers also strengthens regularization (dropout modules are introduced along with the new layers), we adopt the best-performing setting but increase the depth of the Transformer to 24 layers to obtain improved translation quality. Our work obtains the SOTA performance in Chinese-to-Thai translation in the constrained evaluation.
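As a rough illustration of the kind of settings being explored, a hypothetical configuration is sketched below; the concrete values are placeholders for the hyperparameters named above, not the tuned values reported in the paper:

```python
# Illustrative low-resource deep-Transformer settings (assumed values).
config = {
    "bpe_merge_ops": 10_000,   # smaller BPE vocabulary for scarce data
    "dropout": 0.3,            # stronger regularization
    "embed_dim": 512,
    "ffn_dim": 2048,
    "attention_heads": 8,
    "encoder_layers": 24,      # deepened from the 6-layer baseline
    "decoder_layers": 24,
}
```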
Current computer vision models, unlike the human visual system, cannot yet achieve general-purpose visual understanding. Existing efforts to create a general vision model are limited in the scope of assessed tasks and offer no overarching framework to perform them holistically. We present a new comprehensive benchmark, General-purpose Visual Understanding Evaluation (G-VUE), covering the full spectrum of visual cognitive abilities with four functional domains: Perceive, Ground, Reason, and Act. The four domains are embodied in 11 carefully curated tasks, from 3D reconstruction to visual reasoning and manipulation. Along with the benchmark, we provide a general encoder-decoder framework that allows evaluation of arbitrary visual representations on all 11 tasks. We evaluate various pre-trained visual representations with our framework and observe that (1) Transformer-based visual backbones generally outperform CNN-based backbones on G-VUE, and (2) visual representations from vision-language pre-training are superior to those from vision-only pre-training across visual tasks. With G-VUE, we provide a holistic evaluation standard to motivate research toward building general-purpose visual systems via obtaining more general-purpose visual representations.
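A hedged PyTorch sketch of what such an encoder-decoder evaluation harness could look like; the class name, the frozen-backbone choice, and the linear decoder heads are illustrative assumptions, not the benchmark's actual code:

```python
import torch
import torch.nn as nn

class GeneralEvalModel(nn.Module):
    """Hypothetical wrapper: a shared visual backbone (the representation
    under evaluation) plus one lightweight decoder head per task."""
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 task_out_dims: dict[str, int], freeze_backbone: bool = True):
        super().__init__()
        self.backbone = backbone
        if freeze_backbone:  # probe the representation rather than finetune it
            for p in self.backbone.parameters():
                p.requires_grad = False
        self.decoders = nn.ModuleDict({
            task: nn.Linear(feat_dim, out_dim)   # e.g. 11 heads for 11 tasks
            for task, out_dim in task_out_dims.items()
        })

    def forward(self, images: torch.Tensor, task: str) -> torch.Tensor:
        feats = self.backbone(images)   # assumed to return (batch, feat_dim)
        return self.decoders[task](feats)
```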
Hybrid tabular-textual question answering (QA) requires reasoning over heterogeneous information, and the types of reasoning are mainly divided into numerical reasoning and span extraction. Although numerical reasoning is the main challenge of the task compared to extractive QA, current methods simply use an LSTM to autoregressively decode program sequences, where each decoding step produces either an operator or an operand. However, step-by-step decoding suffers from exposure bias, and the accuracy of program generation drops sharply as decoding progresses. In this paper, we propose a non-autoregressive program generation framework, which generates programs in parallel. Our framework, which independently generates complete program tuples containing both operators and operands, can significantly boost the speed of program generation while addressing the error accumulation issue. Our experiments on the MultiHiertt dataset show that our model brings large improvements (+7.97 EM and +6.38 F1 points) over the strong baseline, establishing new state-of-the-art performance while being much faster (21x) in program generation. The performance drop of our method with increasing numbers of numerical reasoning steps is also significantly smaller than the baseline's.
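A hedged PyTorch sketch of the parallel-tuple idea: a fixed set of learned queries attends over the encoder output, and separate heads predict every tuple's operator and operands in one pass instead of decoding token by token. Names and sizes are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class NonAutoregressiveProgramHead(nn.Module):
    """Predict all program tuples (operator + operand slots) in parallel
    from the question/table encoder output."""
    def __init__(self, hidden: int, n_ops: int, n_operand_slots: int,
                 operand_vocab: int, max_tuples: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(max_tuples, hidden))
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.op_head = nn.Linear(hidden, n_ops)
        self.operand_head = nn.Linear(hidden, n_operand_slots * operand_vocab)
        self.n_slots, self.vocab = n_operand_slots, operand_vocab

    def forward(self, enc: torch.Tensor):
        # enc: (batch, seq_len, hidden); one query slot per program tuple
        q = self.queries.unsqueeze(0).expand(enc.size(0), -1, -1)
        tup, _ = self.attn(q, enc, enc)
        ops = self.op_head(tup)  # (batch, max_tuples, n_ops)
        operands = self.operand_head(tup).view(
            enc.size(0), -1, self.n_slots, self.vocab)
        return ops, operands     # all tuples predicted in parallel
```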
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topologies they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.
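At a high level, the training loop might be sketched as follows, assuming placeholder generator, differentiable-renderer, and discriminator modules; this is a conceptual sketch of training a 3D generator from 2D images, not the released implementation:

```python
import torch
import torch.nn.functional as F

def train_step(generator, renderer, discriminator, real_images,
               opt_g, opt_d, z_dim=512):
    """One adversarial step: sample a latent, generate an explicit textured
    mesh, render it differentiably to 2D, and apply a 2D GAN loss, so no
    3D supervision is required."""
    z = torch.randn(real_images.size(0), z_dim)
    mesh, texture = generator(z)            # explicit textured 3D mesh
    fake_images = renderer(mesh, texture)   # differentiable 2D rendering

    # Discriminator update (non-saturating GAN loss on 2D images only).
    d_loss = (F.softplus(discriminator(fake_images.detach()))
              + F.softplus(-discriminator(real_images))).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: gradients flow back through the renderer.
    g_loss = F.softplus(-discriminator(fake_images)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```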
We describe JD Explore Academy's submission to the WMT 2022 shared general translation task. We participated in all high-resource tracks and one medium-resource track, including Chinese-English, German-English, Czech-English, Russian-English, and Japanese-English. We push the limits of our previous work, bidirectional training for translation, by scaling up two main factors, namely language pairs and model size, to build the Vega-MT system. As for language pairs, we scale the "bidirectional" setting up to a "multidirectional" one covering all participating languages, to exploit common knowledge across languages and transfer it to downstream bilingual tasks. As for model size, we scale the Transformer up to an extremely large model with nearly 4.7 billion parameters to fully enhance the model capacity of Vega-MT. In addition, we adopt data augmentation strategies, such as cycle translation for monolingual data and bidirectional self-training for bilingual and monolingual data, to comprehensively exploit both kinds of data. To adapt Vega-MT to the general-domain test set, generalization tuning is designed. Based on the official automatic scores of constrained systems, in terms of the sacreBLEU shown in Figure 1, we got first place on {Zh-En (33.5), En-Zh (49.7), De-En (33.7), En-De (37.8), Cs-En (54.9), En-Cs (41.4), and En-Ru (32.7)}, second place on {Ru-En (45.1) and Ja-En (25.6)}, and third place on {En-Ja (41.5)}; in terms of COMET, we got first place on {Zh-En (45.1), En-Zh (61.7), De-En (58.0), En-De (63.2), Cs-En (74.7), Ru-En (64.9), En-Ru (69.6), and En-Ja (65.1)}, and second place on {En-Cs (95.3) and Ja-En (40.6)}. Models will be released to facilitate the MT community through GitHub and the OmniForce platform.
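A minimal sketch of cycle-translation-style augmentation for monolingual data under assumed interfaces (`backward_model` and its `.translate()` method are hypothetical, standing in for a trained target-to-source model):

```python
def cycle_translate(mono_target_sentences, backward_model):
    """Build synthetic bitext from target-side monolingual data:
    translate each target sentence back into the source language and
    pair the synthetic source with the genuine target, which can then
    be mixed into the forward model's training data."""
    return [(backward_model.translate(tgt), tgt)
            for tgt in mono_target_sentences]
```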
Pre-training (PT) of text representations has been successfully applied to low-resource neural machine translation (NMT). However, it usually fails to achieve notable gains (and is sometimes even worse) on resource-rich NMT compared with its random-initialization (RI) counterpart. We take the first step to investigate the complementarity between PT and RI in resource-rich scenarios via two probing analyses, and find that: 1) PT improves not the accuracy but the generalization, by achieving flatter loss landscapes than RI; 2) PT improves not the confidence of lexical choice but the negative diversity, by assigning smoother lexical probability distributions than RI. Based on these insights, we propose to combine their complementarities with a model fusion algorithm that utilizes optimal transport to align the neurons between PT and RI. Experiments on two resource-rich translation benchmarks, WMT'17 English-Chinese (20M) and WMT'19 English-German (36M), show that PT and RI can complement each other nicely, achieving substantial improvements in translation accuracy, generalization, and negative diversity simultaneously. Probing tools and code are released at https://github.com/zanchangtong/ptvsri.
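One way to realize optimal-transport-based neuron alignment, sketched as a one-to-one min-cost matching (a special case of optimal transport) via SciPy; the cost function and the interpolation weight `alpha` are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(w_pt: np.ndarray, w_ri: np.ndarray, alpha: float = 0.5):
    """Fuse one weight matrix from a pre-trained (PT) model and a
    randomly-initialized-then-trained (RI) model: align neurons (rows)
    before interpolating, since neuron order is arbitrary across models."""
    # cost[i, j] = distance between PT neuron i and RI neuron j
    cost = np.linalg.norm(w_pt[:, None, :] - w_ri[None, :, :], axis=-1)
    pt_idx, ri_idx = linear_sum_assignment(cost)  # min-cost matching
    w_ri_aligned = np.empty_like(w_ri)
    w_ri_aligned[pt_idx] = w_ri[ri_idx]           # permute RI neurons to PT order
    return alpha * w_pt + (1 - alpha) * w_ri_aligned
```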
Temporal action localization, which aims to localize and classify actions in untrimmed videos, plays an important role in video analysis. Previous methods usually predict actions on a feature space of a single temporal scale. However, temporal features at low-level scales lack sufficient semantics for action classification, while high-level scales cannot provide rich details of action boundaries. To address this, we propose to predict actions on feature spaces of multiple temporal scales. Specifically, we use refined feature pyramids of different scales to pass semantics from high-level scales down to low-level scales. Furthermore, to model long temporal ranges over the whole video, we use a spatial-temporal transformer encoder to capture long-range dependencies among video frames. The refined features with long-range dependencies are then fed into a classifier for coarse action prediction. Finally, to further improve prediction accuracy, we propose a frame-level self-attention module to refine the classification and boundaries of each action instance. Extensive experiments show that the proposed method outperforms state-of-the-art approaches on the THUMOS14 dataset and achieves comparable performance on the ActivityNet1.3 dataset. Compared with A2Net (TIP20, Avg {0.3:0.7}), Sub-Action (CSVT2022, Avg {0.1:0.5}), and AFSD (CVPR21, Avg {0.3:0.7}), the proposed method achieves improvements of 12.6%, 17.4%, and 2.2%, respectively.
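A hedged PyTorch sketch of a multi-scale temporal feature pyramid with a transformer encoder at the coarsest scale; module names and sizes are assumptions for illustration, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalPyramid(nn.Module):
    """Downsample frame features into coarser temporal scales with strided
    1D convolutions, capture long-range dependencies with a transformer
    encoder at the coarsest scale, then pass the high-level semantics back
    down to refine every scale."""
    def __init__(self, dim: int = 256, num_scales: int = 3):
        super().__init__()
        self.downs = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=3, stride=2, padding=1)
             for _ in range(num_scales - 1)])
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, dim) frame-level features
        feats = [x]
        h = x.transpose(1, 2)               # (batch, dim, time) for Conv1d
        for down in self.downs:
            h = down(h)
            feats.append(h.transpose(1, 2))
        coarse = self.encoder(feats[-1])    # long-range temporal context
        out = []
        for f in feats:                     # semantics: high-level -> low-level
            ctx = F.interpolate(coarse.transpose(1, 2), size=f.size(1),
                                mode="linear", align_corners=False)
            out.append(f + ctx.transpose(1, 2))
        return out                          # refined pyramid, fine to coarse
```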